
I'm trying to get to the crux of the differences between the progress studies (PS) and the EA / existential risk (XR) communities. I'd love input from you all on my questions below.

The road trip metaphor

Let me set up a metaphor to frame the issue:

Picture all of humanity in a car, traveling down the highway of progress. Both PS and EA/XR agree that the trip is good, and that as long as we don't crash, faster would be better. But:

  • XR thinks that the car is out of control and that we need a better grip on the steering wheel. We should not accelerate until we can steer better, and maybe we should even slow down in order to avoid crashing.
  • PS thinks we're already slowing down, and so wants to put significant attention into re-accelerating. Sure, we probably need better steering too, but that's secondary.

(See also @Max_Daniel's recent post)

My questions

Here are some things I don't really understand about the XR position (granted that I haven't read the literature on it extensively yet, but I have read a number of the foundational papers).

(Edit for clarity: these questions are not proposed as cruxes. They are just questions I am unclear on, related to my attempt to find the crux)

How does XR weigh costs and benefits?

Is there any cost that is too high to pay, for any level of XR reduction? Are they willing to significantly increase global catastrophic risk—one notch down from XR in Bostrom's hierarchy—in order to decrease XR? I do get that impression. They seem to talk about any catastrophe less than full human extinction as, well, not that big a deal.

For instance, suppose that if we accelerate progress, we can end poverty (by whatever standard) one century earlier than otherwise. In that case, failing to do so, in itself, should be considered a global catastrophic risk, or close to it. If you're willing to accept GCR in order to slightly reduce XR, then OK—but it feels to me that you've fallen for a Pascal's Mugging.

Eliezer has specifically said that he doesn't accept Pascal's Mugging arguments in the x-risk context, and Holden Karnofsky has indicated the same. The only counterarguments I've seen conclude “so AI safety (or other specific x-risk) is still a worthy cause”—which I'm fine with. I don't see how you get to “so we shouldn't try to speed up technological progress.”

Does XR consider tech progress default-good or default-bad?

My take is that tech progress is default good, but we should be watchful for bad consequences and address specific risks. I think it makes sense to pursue specific projects that might increase AI safety, gene safety, etc. I even think there are times when it makes sense to put a short-term moratorium on progress in an area in order to work out some safety issues—this has been done once or twice already in gene safety.

When I talk to XR folks, I sometimes get the impression that they want to flip it around, and consider all tech progress to be bad unless we can make an XR-based case that it should go forward. That takes me back to point (1).

What would moral/social progress actually look like?

There's this idea that it's more important to make progress in non-tech areas: epistemics, morality, coordination, insight, governance, whatever. I actually sort of agree with that, but I'm not sure at all that what I have in mind there corresponds to what EA/XR folks are thinking. Maybe this has been written up somewhere, and I haven't found it yet?

Without understanding this, it comes across as if tech progress is on indefinite hold until we somehow become better people and thus have sufficiently reduced XR—although it's unclear how we could ever reduce it enough, because of (1).

What does XR think about the large numbers of people who don't appreciate progress, or actively oppose it?

Returning to the road trip metaphor: while PS and EA/XR debate the ideal balance of resources towards steering vs. acceleration, and which is more neglected, there are other passengers in the car. Many are yelling to just slow down, and some are even saying to turn around and go backwards. A few, full of revolutionary zeal, are trying to jump up and seize the steering wheel in order to accomplish this, while others are trying to sabotage the car itself. Before PS and EA/XR even resolve our debate, the car might be run off the road—either as an accident caused by fighting groups, or on purpose.

This seems like a problem to me, especially in the context of (3): I don't know how we make social progress, when this is what we have to work with. So a big part of progress studies is trying to just educate more people that the car is valuable and that forward is actually where we want to go. (But I don't think anyone in EA/XR sees it this way or is sympathetic to this line of reasoning, if only because I've never heard them discuss this faction of humanity at all or recognize it as a problem.)


Thank you all for your input here! I hope that understanding these issues better will help me finally answer @Benjamin_Todd's question, which I am long overdue on addressing.

Comments
  • How does XR weigh costs and benefits?
    • I don't think there's a uniform "existential risk reduction movement" answer to what % of X-risk reduction people are willing to trade for large harms today, any more than there's a uniform "effective altruism movement" answer for what the limits of altruism ought to be. For me personally, all the realistic tradeoffs I make day to day point towards reducing X-risk being of overwhelming importance, aside from model uncertainty.
      • In practice, this means I try to spend my time thinking about longtermism stuff, with mixed success, aside from a) stuff that builds my own skills or other ways that I can be more effective, b) small cooperative actions that I think are unusually good according to value systems otherwise fairly similar to my own, c) entertainment, and d) bullshit.
      • I think the probability of existential risk is high enough that I'm not, in practice, too worried about Pascal's mugging issues. In the moments I doubt whether EA longtermism is what I ought to be working on, I'm much more worried about issues in the class of "randomly or systematically deluding myself about the probabilities of specific outcomes I'm worried about" than about issues akin to "my entire decision procedure is wrong because expected value calculations are dumb."
  • Does XR consider tech progress default-good or default-bad?
    • Basically my view is similar to what JP said. From an XR perspective, the sign of faster tech progress is pretty hard to determine. There are some weak arguments in favor of it being systematically good (e.g. that fast technological progress reduces natural risks over the long run, while not pushing systematically in either direction for man-made risks), but these are pretty weak if we think the base rate of natural risks is very low. Of course, I don't just have an XR perspective, and from other perspectives (as JP mentions) the benefits of technological and economic progress are clearer.
      • As an aside, I think trying to shoehorn in "belief in progress" from an xrisk perspective is kinda dubious. Analogously, I think it'd be dumb to sell kids on the value of learning music because of purported transfer benefits to mathematics standardized tests. Sure, there may or may not be some cognitive benefits to learning music, but that's not why I got into music, music is hardly the most efficient way to learn mathematics, and it'd be dubious if you primarily got into music to boost your math test scores.
  • What would moral/social progress actually look like?
    • I think a lot of the types of progress that some subset of EAs (including myself) are interested in are "specific technical or organizational solutions to risks that we think of as on the horizon." In terms of high-level moral/social progress, this is something we don't fully understand and are still trying to get better clarity on.
      • "it comes across as if tech progress is on indefinite hold until we somehow become better people and thus have sufficiently reduced XR" I think this is assigning more impact/agency/credit/blame to either XR or EAs than is reasonable.  The rest of the world is big, and many people want to (often for selfish or narrowly altruistic reasons) indirectly work on economic progress to better their own lives, or that of people close to them. Some people like yourself also want to more broadly increase economic progress for more global reasons.
  • What does XR think about the large numbers of people who don't appreciate progress, or actively oppose it?
    • In practice I don't think much about them.
    • I think they're probably wrong about this but they're probably wrong about many other things too (as am I, to be clear!), and I personally feel (possibly over-arrogantly) that I understand their position well enough to reject it, or at least that I have made enough attempts at communication that further attempts aren't very useful.
    • Note that this is just a personal take; reasonable people can disagree about how valuable this type of engagement is, either in general or in this specific case, and I can also imagine (though I think it's unlikely) that people much more charitable and/or perceptive and/or diplomatic than myself could gain a lot of value from those conversations.
  • Re the "highway of progress"/road trip metaphor:
    • I like the idea of a highway of progress.
    • A key distinction to me is whether you think of the highway as more of a road trip or more of a move (less metaphorically, whether the purpose is more to enjoy the journey or to go from Point A to Point B).
    • To extend and hopefully not butcher the metaphor too much, my stereotype of progress studies people is that they treat the highway of progress as pretty much like a road trip. That is, you have limited time and it's really important to strike a balance between a) an appropriate fast/steady pace along this highway of progress and b) enjoying yourself while you're at it. Sure, it's important to avoid unlikely catastrophes like driving off a cliff, but those things are unusual and the faster you go the more sights you see, etc. So the usual balance of considerations is something like consuming now (stopping and smelling the roses, in this metaphor) vs doing more stuff so we and our near descendants can enjoy more stuff later (traveling faster to see more sights, in this metaphor).
    • My own view** of the highway of progress is that we're actually trying to go somewhere, perhaps with the intention of settling there forever. We're leaving our unpleasant homeland to travel, along a dangerous road, to somewhere glorious and wonderful, perhaps a location we previously identified as potentially great (like New York), perhaps a nice suburb somewhere along the road.
      • (There are nicer and less nice versions of this metaphor. Perhaps we're refugees from a wartorn country. Perhaps we're new college graduates convinced that we can find a better life outside of our provincial hometowns).
    • So to many longtermist EAs, the destination matters much more than the journey.
    • In this regard, I think the story/framing where "humanity is trying to take an exciting but dangerous journey to lands known and unknown" (ie, trying to reach utopia) makes some sense as a story where exquisite care should be taken.
      • Ultimately you have a lifetime to go from point A to point B, but you want to be very careful not to make rash, irrevocable moves that would end your journey prematurely.
        • existential risks can come from death (extinction risk) or from other ways the journey ends prematurely
          • in the travel metaphor, you can be stuck in a suboptimal town and fool yourself into thinking it's a great place to live
          • in our world, this could be dystopias or astronomical suffering, or just being trapped in bad values
            • Notably, there were many past (admittedly non-existential) failures from people attempting to reach utopia
    • if you see the highway of progress as a road trip:
      • haste is really important (limited vacation time!)
      • you'd be a dumbass to be so careful that you barely visit anywhere.
      • journey matters more than destination
        • not reaching the final destination is totally fine.
      • safety is important but not critical
    • if you see the highway as a mode of one-way transit:
      • fine to take your time
      • you have a lifetime to get there
      • when you get there is much less important than whether you get there at all.
        • obviously you'd still want to get there eventually, but since the trade is something on the order of 1% of astronomical waste for every 10 million years of delay, it's not too important how fast you go.
  • I haven't heard a satisfying explanation from Progress Studies folks of why economic growth is so urgent, at least for people who a) think there are percentage-point probabilities of existential risk and b) agree with a zero intrinsic discount rate.
    • Possible explanations:
      • There aren't percentage-point levels of existential risk,
      • alternatively, the net probability that "dedicated effort from humanity to avert existential risks" can actually reduce x-risk is <<1%.
      • As a special case of the above, the probability of all of humanity dying in the next ~1000 years approaches 1 (cf Tyler Cowen, H/T Applied Divinity Studies)
        • This is an argument for working on medium-term economic progress over working on existential risk not because the risk is too low, but because it's too (unfixably) high.
      • a zero intrinsic discount rate is morally or empirically mistaken
      • temporary stagnation either inevitably or with high probability leads to permanent stagnation (so it is itself an existential risk).
      • Some combination of values pluralism + comparative advantage arguments, such that it makes some sense for some individuals to work on progress studies even if it is overall less overwhelmingly important than xrisk.
        • I find this very plausible but my general anecdotal feeling from you and others in this circle is that you're usually making much stronger claims.
      • Something else
    • Clarifying which position individuals believe may be helpful here.

 

** To be clear, you don't have to have this view to be a longtermist or an EA, but I do think it is much more the view of the modal longtermist EA than of the modal Progress Studies fan.

If you're willing to accept GCR in order to slightly reduce XR, then OK—but it feels to me that you've fallen for a Pascal's Mugging.

Eliezer has specifically said that he doesn't accept Pascal's Mugging arguments in the x-risk context

I wouldn't agree that this is a Pascal's Mugging. In fact, in a comment on the post you quote, Eliezer says:

If an asteroid were genuinely en route, large enough to wipe out humanity, possibly stoppable, and nobody was doing anything about this 10% probability, I would still be working on FAI but I would be screaming pretty loudly about the asteroid on the side. If the asteroid is just going to wipe out a country, I'll make sure I'm not in that country and then keep working on x-risk.

I usually think of Pascal's Mugging as centrally about cases where you have a tiny probability of affecting the world in a huge way. In contrast, your example seems to be about trading off between uncertain large-sized effects and certain medium-sized effects. ("Medium" is only meant to be relative to "large", obviously both effects are huge on some absolute scale.)

Perhaps your point is that XR can only make a tiny, tiny dent in the probability of extinction; I think most XR folks would have one of two responses:

  1. No, we can make a reasonably large dent. (This would be my response.) Off the top of my head I might say that the AI safety community as a whole could knock off ~3 percentage points from x-risk.
  2. X-risk is so over-determined (i.e. > 90%, maybe > 99%) that even though we can't affect it much, there's no other intervention that's any better (and in particular, progress studies doesn't matter because we die before it has any impact).

The other three questions you mention don't feel cruxy.

The second one (default-good vs. default-bad) doesn't really make sense to me -- I'd say something like "progress tends to increase our scope of action, which can lead to major improvements in quality of life, and also increases the size of possible risks (especially from misuse)".

I'm not making a claim about how effective our efforts can be. I'm asking a more abstract, methodological question about how we weigh costs and benefits.

If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal's Mugging.

If not, then great—we agree that we can and should weigh costs and benefits. Then it just comes down to our estimates of those things.

And so then I just want to know, OK, what's the plan? Maybe the best way to find the crux here is to dive into the specifics of what PS and EA/XR each propose to do going forward. E.g.:

  • We should invest resources in AI safety? OK, I'm good with that. (I'm a little unclear on what we can actually do there that will help at this early stage, but that's because I haven't studied it in depth, and at this point I'm at least willing to believe that there are valuable programs there. So, thumbs up.)
  • We should raise our level of biosafety at labs around the world? Yes, absolutely. I'm in. Let's do it.
  • We should accelerate moral/social progress? Sure, we absolutely need that—how would we actually do it? See question 3 above.

But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost. Failing to maintain and accelerate progress, in my mind, is a global catastrophic risk, if not an existential one. And it's unclear to me whether this would even increase or decrease XR, let alone the amount—in any case I think there are very wide error bars on that estimate.

But maybe that's not actually the proposal from any serious EA/XR folks? I am still unclear on this.

If XR weighs so strongly (1e15 future lives!) that you are, in practice, willing to accept any cost (no matter how large) in order to reduce it by any expected amount (no matter how small), then you are at risk of a Pascal's Mugging.

Sure. I think most longtermists wouldn't endorse this (though a small minority probably would).

But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost.

I don't think this is negative; I just think there are better opportunities to affect the future (along the lines of Ben's comment).

I think this is mostly true of other EA / XR folks as well (or at least, if they think it is negative, they aren't confident enough in it to actually say "please stop progress in general"). As I mentioned above, people (including me) might say it is negative in specific areas, such as AGI development, but not more broadly.

And it's unclear to me whether this would even increase or decrease XR, let alone the amount—in any case I think there are very wide error bars on that estimate.

I agree with that (and I think most others would too).

OK, so maybe there are a few potential attitudes towards progress studies:

  1. It's definitely good and we should put resources to it
  2. Eh, it's fine but not really important and I'm not interested in it
  3. It is actively harming the world by increasing x-risk, and we should stop it

I've been perceiving a lot of EA/XR folks to be in (3) but maybe you're saying they're more in (2)?

Flipping it around, PS folks could have a similar (1) positive / (2) neutral / (3) negative attitude towards XR efforts. My view is not settled, but right now I'm somewhere between (1) and (2)… I think there are valuable things to do here, and I'm glad people are doing them, but I can't see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).

Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we're just disagreeing on relative priority and neglectedness.

(But I don't think that's all of it.)

Your 3 items cover good+top priority, good+not top priority, and bad+top priority, but not #4, bad+not top priority.

I think people concerned with x-risk generally think that progress studies, as a program of intervention to expedite growth, is going to have less expected impact (good or bad) on the history of the world per unit of effort than more targeted work on urgent threats. And if we condition on people thinking progress studies does more harm than good, then mostly they'll say it's not important enough to focus on arguing against at the current margin (as opposed to directly targeting urgent threats to the world). Only a small portion of generalized economic expansion will go to the most harmful activities (where the damage comes from expediting dangerous technologies in AI and bioweapons that we are improving in our ability to handle, so that delay would help) or to efforts to avert disaster, so there is much more leverage in focusing narrowly on the most important areas.

With respect to synthetic biology in particular, I think there is a good case for delay: right now the capacity to kill most of the world's population with bioweapons is not available in known technologies (although huge secret bioweapons programs like the old Soviet one may have developed dangerous things already), and if that capacity is delayed there is a chance it will be averted or much easier to defend against via AI, universal sequencing, and improvements in defenses and law enforcement. This is even moreso for those sub-areas that most expand bioweapon risk. That said, any attempt to discourage dangerous bioweapon-enabling research must compete against other interventions (improved lab safety, treaty support, law enforcement, countermeasure platforms, etc), and so would have to itself be narrowly targeted and leveraged. 

With respect to artificial intelligence, views on sign vary depending on whether one thinks the risk of an AI transition is getting better or worse over time (better because of developments in areas like AI alignment and transparency research, field-building, etc; or worse because of societal or geopolitical changes). Generally though people concerned with AI risk think it much more effective to fund efforts to find alignment solutions and improved policy responses (growing them from a very small base, so cost-effectiveness is relatively high) than a diffuse and ineffective effort to slow the technology (especially in a competitive world where the technology would be developed elsewhere, perhaps with higher transition risk).

For most other areas of technology and economic activity (e.g. energy, agriculture, most areas of medicine) x-risk/longtermist implications are comparatively small, suggesting a more neartermist evaluative lens (e.g. comparing more against things like GiveWell).

Long-lasting (centuries) stagnation is a risk worth taking seriously (and the slowdown of population growth that sustained superexponential growth through history until recently points to stagnation absent something like AI to ease the labor bottleneck), but seems a lot less likely than other x-risk. If you think AGI is likely this century then we will return to the superexponential track (but more explosively) and approach technological limits to exponential growth followed by polynomial expansion in space. Absent AGI or catastrophic risk (although stagnation with advanced WMD would increase such risk), permanent stagnation also looks unlikely based on the capacities of current technology given time for population to grow and reach frontier productivity.

I think the best case for progress studies being top priority would be strong focus on the current generation compared to all future generations combined, on rich country citizens vs the global poor, and on technological progress over the next few decades, rather than in 2121. But given my estimates of catastrophic risk and sense of the interventions, at the current margin I'd still think that reducing AI and biorisk do better for current people than the progress studies agenda per unit of effort.

I wouldn't support arbitrary huge sacrifices of the current generation to reduce tiny increments of x-risk, but at the current level of neglectedness and impact (for both current and future generations) averting AI and bio catastrophe  looks more impactful without extreme valuations. As such risk reduction efforts scale up marginal returns would fall and growth boosting interventions would become more competitive (with a big penalty for those couple of areas that disproportionately pose x-risk).

That said, understanding tech progress, returns to R&D, and similar issues also comes up in trying to model and influence the world in assorted ways (e.g. it's important in understanding AI risk, or building technological countermeasures to risks to long term development). I have done a fair amount of investigation that would fit into progress studies as an intellectual enterprise for such purposes.

I also lend my assistance to some neartermist EA research focused on growth, in areas that don't very disproportionately increase x-risk, and to development of technologies that make it more likely things will go better.

I've been perceiving a lot of EA/XR folks to be in (3) but maybe you're saying they're more in (2)?

Yup.

Maybe it turns out that most folks in each community are between (1) and (2) toward the other. That is, we're just disagreeing on relative priority and neglectedness.

That's what I would say.

I can't see it as literally the only thing worth spending any marginal resources on (which is where some XR folks have landed).

If you have opportunity A where you get a benefit of 200 per $ invested, and opportunity B where you get a benefit of 50 per $ invested, you want to invest in A as much as possible, until the opportunity dries up. At a civilizational scale, opportunities dry up quickly (i.e. with millions, maybe billions of dollars), so you see lots of diversity. At EA scales, this is less true.

So I do agree that some XR folks (myself included) would, if given a pot of millions of dollars to distribute, allocate it all to XR; I don't think the same people would do it for e.g. trillions of dollars. (I don't know where in the middle it changes.)
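As a toy sketch of that "fund the 200-per-dollar opportunity until it dries up" logic: the 200 and 50 starting figures come from the comment above, while the decay form, saturation scales, step sizes, and budgets below are made-up assumptions for illustration.

```python
# Toy model of "fund A until its marginal returns fall to B's level."
# Starting marginal benefits (200 and 50 per $) are from the comment above;
# the decay form, saturation scales, step sizes, and budgets are made up.

def marginal_benefit(initial, spent, saturation):
    """Benefit of the next dollar, decaying as cumulative spending grows."""
    return initial / (1 + spent / saturation)

def allocate(budget, step):
    spent = {"A": 0.0, "B": 0.0}
    # (initial benefit per $, spending level at which returns have halved)
    params = {"A": (200.0, 50e6), "B": (50.0, 10e9)}
    for _ in range(int(budget / step)):
        # Put the next chunk wherever the current marginal benefit is higher.
        best = max(params, key=lambda k: marginal_benefit(params[k][0], spent[k], params[k][1]))
        spent[best] += step
    return spent

print(allocate(10e6, step=1e6))   # EA-scale budget: everything goes to A
print(allocate(1e12, step=1e9))   # civilization-scale budget: A saturates, most goes to B
```

At a small budget everything goes to the higher-return opportunity; at a much larger budget that opportunity saturates and most marginal dollars go elsewhere, which is the point about diversity appearing at civilizational scales but not at EA scales.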

I think Open Phil, at the billions of dollars range, does in fact invest in lots of opportunities, some of which are arguably about improving progress. (Though note that they are not "fully" XR-focused, see e.g. Worldview Diversification.)

There's a variant of attitude (1) which I think is worth pointing out:

  (1b) Progress studies is good and we should put resources into it, because it is a good way to reduce X-risk on the margin.

Some arguments for (1b):

  • Progress studies helps us understand how tech progress is made, which is useful for predicting X-risk.
  • The more wealthy and stable we are as a civilization, the less likely we are to end up in arms-race type dynamics.
  • Some technologies help us deal with X-risk (e.g. mRNA for pandemic risks, or intelligence augmentation for all risks). This argument only works if PS accelerates the 'good' types of progress more than the 'bad' ones, which seems possible.

Cool to see this thread!

Just a very quick comment on this:

But when the proposal becomes: “we should not actually study progress or try to accelerate it”, I get lost.

I don't think anyone is proposing this. The debate I'm interested in is about which priorities are most pressing at the margin (i.e. creates the most value per unit of resources).

The main claim isn't that speeding up tech progress is bad,* just that it's not the top priority at the margin vs. reducing x-risk or speeding up moral progress.**

One big reason for this is that lots of institutions are already very focused on increasing economic productivity / discovering new tech (e.g. ~2% of GDP is spent on R&D), whereas almost no-one is focused on reducing x-risk.

If the amount of resources devoted to reducing x-risk grows, then it will drop in effectiveness, relatively speaking.

In Toby's book, he roughly suggests that spending 0.1% of GDP on reducing x-risk is a reasonable target to aim for (about what is spent on ice cream). But that would be ~1000x more resources than today.
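As a rough back-of-the-envelope check on that ~1000x figure (the gross world product and current-spending numbers below are ballpark assumptions of mine, not figures from the thread):

\[
0.1\% \times \$90\,\mathrm{T} \;\approx\; \$90\,\mathrm{B/yr}, \qquad \frac{\$90\,\mathrm{B/yr}}{\sim\$0.1\,\mathrm{B/yr}} \;\approx\; 1000\times,
\]

where \$90T is a rough figure for gross world product and ~\$0.1B/yr is an assumed order of magnitude for current targeted x-risk spending.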

*Though I also think speeding up tech progress is more likely to be bad than reducing x-risk is, my best guess is that it's net good.

**This assumes resources can be equally well spent on each. If someone has amazing fit with progress studies, that could make them 10-100x more effective in that area, which could outweigh the average difference in pressingness.

I'm a little unclear on what we can actually do there that will help at this early stage

I'd suggest that this is a failure of imagination (sorry, I'm really not trying to criticise you, but I can't find another phrase that captures my meaning!)

Like let's just take it for granted that we aren't going to be able to make any real research progress until we're much closer to AGI. It still seems like there are several useful things we could be doing:

• We could be helping potential researchers to understand why AI safety might be an issue so that when the time comes they aren't like "That's stupid, why would you care about that!". Note that views tend to change generationally, so you need to start here early.

• We could be supporting the careers of policy people (such as by providing scholarships), so that they are more likely to be in positions of influence when the time comes.

• We could iterate on the AGI safety fundamentals course so that it is the best introduction to the issue possible at any particular time, even if we need to update it.

• We could be organising conferences, fellowships and events so that we have experienced organisers available when we need them.

• We could run research groups so that our leaders have experience in the day-to-day of these organisations and that they already have a pre-vetted team in place for when they are needed.

We could try some kinds of drills or practise instead, but I suspect that the best way to learn how to run a research group is to actually run a research group.

(I want to further suggest that if someone had offered you $1 million and asked you to figure out ways of making progress at this stage then you would have had no trouble in finding things that people could do).

As to whether my four questions are cruxy or not, that's not the point! I wasn't claiming they are all cruxes. I just meant that I'm trying to understand the crux, and these are questions I have. So, I would appreciate answers to any/all of them, in order to help my understanding. Thanks!

I kinda sorta answered Q2 above (I don't really have anything to add to it).

Q3: I'm not too clear on this myself. I'm just an object-level AI alignment researcher :P

Q4: I broadly agree this is a problem, though I think this:

Before PS and EA/XR even resolve our debate, the car might be run off the road—either as an accident caused by fighting groups, or on purpose.

seems pretty unlikely to me, where I'm interpreting it as "civilization stops making any progress and regresses to the lower quality of life from the past, and this is a permanent effect". 

I haven't thought about it much, but my immediate reaction is that it seems a lot harder to influence the world in a good way through the public, and so other actions seem better. That being said, you could search for "raising the sanity waterline" (probably more so on LessWrong than here) for some discussion of approaches to this sort of social progress (though it isn't about educating people about the value of progress in particular).

I'd guess the story might be a) 'XR primacy' (~~ that x-risk reduction has far bigger bang for one's buck than anything else re. impact) and b) conditional on a), an equivocal view on the value of technological progress: some elements are likely good, others likely bad, so the value of generally 'buying the index' of technological development (as I take Progress Studies to be keen on) is uncertain.

"XR primacy"

Other comments have already illustrated the main points here, sparing readers from another belaboured rehearsal from me. The rough story, borrowing from the initial car analogy, is you have piles of open road/runway available if you need to use it, so velocity and acceleration are in themselves much less important than direction - you can cover much more ground in expectation if you make sure you're not headed into a crash first. 

This typically (but not necessarily, cf.) implies longtermism. 'Global catastrophic risk', as a longtermist term of art, plausibly excludes the vast majority of things common sense would call 'global catastrophes'. E.g.:

[W]e use the term “global catastrophic risks” to refer to risks that could be globally destabilizing enough to permanently worsen humanity’s future or lead to human extinction. (Open Phil)

My impression is a 'century more poverty' probably isn't a GCR in this sense. As the (pre-industrial) normal, the track record suggests it wasn't globally destabilising to humanity or human civilisation. Even more so if the question is one of a somewhat-greater versus somewhat-lower rate of its elimination.

This makes its continued existence no less an outrage to the human condition. But, weighed against threats to humankind's entire future, it becomes a lower priority. Insofar as these things are traded off (which seems implicit in any prioritisation, given both compete for resources, whether or not there's any activity at direct cross-purposes), the currency of XR reduction has much greater value.

Per discussion, there are a variety of ways the story sketched above could be wrong:

  • Longtermist consequentialism (the typical, if not uniquely necessary, motivation for the above) is false, so our exchange rate for common-sense global catastrophes (inter alia) versus XR should be higher.
  • XR is either very low, or intractable, so XR reduction isn't a good buy even on the exchange rate XR views endorse. 
  • Perhaps the promise of the future could be lost less with a bang than with a whimper. Perhaps prolonged periods of economic or technological stagnation should be substantial subjects of XR concern in their own right, so PS-land and XR-land converge on PS-y aspirations.

I don't see Pascalian worries as looming particularly large apart from these. XR-land typically takes the disjunction of risks and envelope of mitigation to be substantial/non-pascalian values. Although costly activity that buys an absolute risk reduction of 1/trillions looks dubious to common sense, 1/thousands + (e.g.) is commonplace (and commonsensical) when stakes are high enough. 

It's not clear how much of a strike it is against a view that Pascalian counter-examples are constructable from its resources, even if the view wouldn't endorse them and doesn't have a crisp story of decision-theoretic arcana for why not. Facially, PS seems susceptible to the same (e.g. a PS-er's work is worth billions per year, given the yield if you compound an (in expectation) 0.0000001% marginal increase in world GDP growth for centuries).
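To make the arithmetic in that parenthetical concrete, here is one way it could run (the GDP figure, baseline growth rate, and horizons are illustrative assumptions, not the comment's):

\[
\Delta Y_t \;\approx\; Y_0 (1+g)^t \left[(1+\delta)^t - 1\right] \;\approx\; Y_0 (1+g)^t \, t\delta,
\]

where \(Y_0\) is current world output, \(g\) the baseline growth rate, and \(\delta\) the marginal increase in the growth rate. With \(Y_0 \approx \$90\) trillion, \(g = 2\%\), and \(\delta = 10^{-9}\) (i.e. 0.0000001%), the extra annual output is roughly \(\$90\mathrm{T} \times 52 \times 2\times 10^{-7} \approx \$1\) billion at \(t = 200\) years, and on the order of \$10 billion per year by \(t = 300\).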


Buying the technological progress index?

Granting the story sketched above, there's no straightforward upshot on whether this makes technological progress generally good or bad. The ramifications of any given technological advance for XR are hard to forecast; aggregating over all of them to get a moving average is harder still. Yet there seems to be a lot to temper the fairly unalloyed enthusiasm for technological progress that I take to be the typical attitude in PS-land.

  • There's obviously the appeal to the above sense of uncertainty: if at least significant bits of the technological progress portfolio credibly have very bad dividends for XR, you probably hope humanity is pretty selective and cautious in its corporate investments. It would also be generally surprising if what is best for XR were also best for 'progress' (cf.)
  • The recent track record doesn't seem greatly reassuring. The dual-use worries around nuclear technology remain profound 70+ years after its initial development, and 'derisking' these downsides remains remote. It's hard to assess the true ex ante probability of a strategic nuclear exchange during the cold war, or exactly how disastrous it would have been, but pricing in reasonable estimates of both probably takes a large chunk out of the generally sunny story of progress we observe ex post over the last century.
  • Insofar as folks consider disasters arising from emerging technologies (like AI) to represent the bulk of XR, this supplies concern against their rapid development in particular, and against exuberant technological development which may generate further dangers in general.

Some of this may just be a confusion of messaging (e.g. even though PS folks portray themselves as more enthusiastic and XR folks less so, both would actually be similarly un/enthusiastic for each particular case). I'd guess more of it is substantive disagreement about the balance of promise and danger posed by given technologies (and the prospects/best means to mitigate the latter), which then feeds into more or less 'generalized techno-optimism'.

But I'd guess the majority of the action is around the 'modal XR account' of XR being a great moral priority, which can be significantly reduced, and is substantially composed of risks from emerging technology. "Technocircumspection" seems a fairly sound corollary from this set of controversial conjuncts.   

I see myself as straddling the line between the two communities. More rigorous arguments at the end, but first, my offhand impressions of what I think the median EA/XR person believes:

  • Ignoring XR, economic/technological progress is an immense moral good
  • Considering XR, economic progress is somewhat good, neutral at worst
  • The solution to AI risk is not "put everything on hold until we make epistemic progress"
  • The solution to AI risk is to develop safe AI
  • In the meantime, we should be cautious of specific kinds of development, but it's fine if someone wants to go and improve crop yields or whatever

As Bostrom wrote in 2003: "In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development."

"However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years." https://www.nickbostrom.com/astronomical/waste.html

With regards to poverty reduction, you might also like this post in favor of growth: http://reflectivedisequilibrium.blogspot.com/2018/10/flow-through-effects-of-innovation.html

Thanks ADS. I'm pretty close to agreeing with all those bullet points actually?

I wonder if, to really get to the crux, we need to outline what are the specific steps, actions, programs, investments, etc. that EA/XR and PS would disagree on. “Develop safe AI” seems totally consistent with PS, as does “be cautious of specific types of development”, although both of those formulations are vague/general.

Re Bostrom:

a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.

By the same logic, would a 0.001% reduction in XR be worth a delay of 10,000 years? Because that seems like the kind of Pascal's Mugging I was talking about.
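For what it's worth, that figure is just Bostrom's trade scaled linearly:

\[
\frac{0.001\%}{1\%} = 10^{-3}, \qquad 10^{-3} \times 10^{7}\ \text{years} = 10^{4}\ \text{years},
\]

so the 10,000-year delay follows directly if both the value of risk reduction and the cost of delay scale linearly.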

(Also for what it's worth, I think I'm more sympathetic to the “person-affecting utilitarian” view that Bostrom outlines in the last section of that paper—which may be why I lean more towards speed on the speed/safety tradeoff, and why my view might change if we already had immortality. I wonder if this is the crux?)

Side note:  Bostrom does not hold or argue for 100% weight on total utilitarianism such as to take overwhelming losses on other views for tiny gains on total utilitarian stances. In Superintelligence he specifically rejects an example extreme tradeoff of that magnitude (not reserving one galaxy's worth of resources out of millions for humanity/existing beings even if posthumans would derive more wellbeing from a given unit of resources).

I also wouldn't actually accept a 10 million year delay in tech progress (and the death of all existing beings who would otherwise have enjoyed extended lives from advanced tech, etc) for a 0.001% reduction in existential risk.

Good to hear!

In the abstract, yes, I would trade 10,000 years for 0.001% reduction in XR.

In practice, I think the problem with this kind of Pascal Mugging argument is that it's really hard to know what a 0.001% reduction looks like, and really easy to do some fuzzy Fermi estimate math. If someone were to say "please give me one billion dollars, I have this really good idea to prevent XR by pursuing Strategy X", they could probably convince me that they have at least a 0.001% chance of succeeding. So my objections to really small probabilities are mostly practical.

Thanks for writing this post! I'm a fan of your work and am excited for this discussion.

Here's how I think about costs vs benefits:

I think an existential catastrophe is at least 1000x as bad as a GCR that was guaranteed not to turn into an x-risk. The future is very long, and humanity seems able to achieve a very good one, but it currently looks very vulnerable to me.

I think I can have a tractable impact on reducing that vulnerability. It doesn't seem to me that my impact on human progress would equal my chance of saving it. Obviously that needs some fleshing out — what is my impact on x-risk, what is my impact on progress, how likely am I to have those impacts, etc. But that's the structure of how I think about it.

After initially worrying about pascal's mugging, I've come to believe that x-risk is in fact substantially more likely than 1 in several million, and whatever objections I might have to pascal's mugging don't really apply.

How I think about tech progress:

From an x-risk perspective, I'm pretty ambivalent about tech progress. I've heard arguments that it's good, and that it's bad, but mostly I think it's not a very predictably-large effect on the margin.

But while I care a lot about x-risk reduction, I have different world-views that I put substantial credence in as well. And basically all of those other world-views care a whole lot about human progress. So while I don't view human progress as the cause of my life the way I do x-risk reduction, I'm strongly in favor of more of it.

Finally, as you can imagine from my last answer, I definitely have a lot of conversations where I try to convey my optimism about technology's ability to make lives better. And I think that's pretty common — your blog is well-read in my circles.

Thanks JP!

Minor note: the “Pascal's Mugging” isn't about the chance of x-risk itself, but rather the delta you can achieve through any particular program/action (vs. the cost of that choice).

By that token most particular scientific experiments or contributions to political efforts may be such: e.g. if there is a referendum to pass a pro-innovation regulatory reform and science funding package, a given donation or staffer in support of it is very unlikely to counterfactually tip it into passing, although the expected value and average returns could be high, and the collective effort has a large chance of success.

Sure, but the delta you can achieve with anything is small, depending on how you delineate an action. True, x-risk reduction is on the more extreme end of this spectrum, but I think the question should be "can these small deltas/changes add up to a big delta/change? (vs. the cost of that choice)" and the answer to that seems to be "yes."

Is your issue more along the following?

  1. Humans are bad at estimating very small percentages accurately, and can be orders of magnitudes off (and the same goes for astronomical values in the long-term future)
  2. Arguments for the cost-effectiveness of x-risk reduction rely on estimating very small percentages (and the same goes for astronomical values in the long-term future)
  3. (Conclusion) Arguments for the cost-effectiveness of x-risk reduction cannot be trusted.

If so, I would reject 2, because I believe we shouldn't try to quantify things at those levels of precision. This does get us to your question "How does XR weigh costs and benefits?", which I think is a good question to which I don't have a great answer. It would be something along the lines of "there's a grey area where I don't know how to make those tradeoffs, but most things do not fall into the grey area, so I'm not worrying too much about this. If I wouldn't fund something that supposedly reduces x-risk, it's either because I think it might increase x-risk, or because I think there are better options available for me to fund". Do you believe that many more choices fall into that grey area?

Imagine we can divide up the global economy into natural clusters. We'll refer to each cluster as a "Global Project." Each Global Project consists of people and their ideas, material resources, institutional governance, money, incentive structures, and perhaps other factors.

Some Global Projects seem "bad" on the whole. They might have directly harmful goals, irresponsible risk management, poor governance, or many other failings. Others seem "good" on net. This is not in terms of expected value for the world, but in terms of the intrinsic properties of the GP that will produce that value.

It might be reasonable to assume that Global Project quality is normally distributed. One point of possible difference is the center of that distribution. Are most Global Projects of bad quality, neutral, or good quality?

We might make a further assumption that the expected value of a Global Project follows a power law, such that projects of extremely low or high quality produce disproportionately more value (or more harm). Perhaps, if Q is project quality and V is value, $V = Q^N$. But we might disagree on the details of this power law.

One possibility is that in fact, it's easier to destroy the world than to improve the world. We might model this with two power laws, one for Q > 0 and one for Q < 0, like so:

  • $V = Q^{N_{\text{good}}}$, for Q >= 0
  • $V = -|Q|^{N_{\text{bad}}}$, for Q < 0

In this case, whether or not progress is good will depend on the details of our assumptions about both the project quality distribution and the power law for expected value:

  • The size of N, and whether or not the power law is uniform or differs for projects of various qualities. Intuitively, "is it easier for a powerful project to improve or destroy the world, and how much easier?"
  • How many standard deviations away from zero the project quality distribution is centered, and in which direction. Intuitively, "are most projects good or bad, and how much?"

In this case, whether or not average expected value across many simulations of such a model is positive or negative can hinge on small alterations of the variables. For example, if we set N = 7 for bad projects and N = 3 for good projects, but we assume that the average project quality is +0.6 standard deviations from zero, then average expected value is mildly negative. At project quality +0.7 standard deviations from zero, the average expected value is mildly positive.

Here's what an X-risk "we should slow down" perspective might look like. Each plotted point is a simulated "world." In this case, the simulation produces negative average EV across simulated worlds.

And here is what a Progress Studies "we should speed up" perspective might look like, with positive average EV.

The joke is that it's really hard to tell these two simulations apart. In fact, I generated the second graph by shifting the center of the project quality distribution 0.01 standard deviations to the right relative to the first graph. In both cases, a lot of the expected value is lost to a few worlds in which things go cataclysmically wrong.
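Here is a minimal Monte Carlo sketch of this kind of model (the exponents and the 0.6/0.7 means come from the description above; the unit variance of the quality distribution and the per-project averaging are assumptions of mine):

```python
import numpy as np

def average_ev(mean_quality, n_good=3, n_bad=7, n_samples=4_000_000, seed=0):
    """Average value per project when quality Q ~ Normal(mean_quality, 1),
    with V = Q**n_good for Q >= 0 and V = -|Q|**n_bad for Q < 0."""
    rng = np.random.default_rng(seed)
    q = rng.normal(mean_quality, 1.0, size=n_samples)
    v = np.where(q >= 0.0, q ** n_good, -np.abs(q) ** n_bad)
    return v.mean()

# A small shift in the center of the quality distribution moves the average a lot,
# because a handful of far-left-tail projects dominate the downside.
for mu in (0.6, 0.7):
    print(f"mean quality +{mu} SD -> average EV per project: {average_ev(mu):+.2f}")
```

Whether the average comes out mildly positive or mildly negative at a given mean is extremely sensitive to the assumed exponents and variance, which is the point of the joke above: the optimistic and pessimistic regimes are very hard to tell apart from the simulated worlds themselves.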

One way to approach a double crux would be for adherents of the two sides to specify, in the spirit of "if it's worth doing, it's worth doing with made up statistics," their assumptions about the power law and project quality distribution, then argue about that. Realistically, though, I think both sides understand that we don't have any realistic way of saying what those numbers ought to be. Since the details matter on this question, it seems to me that it would be valuable to find common ground.

For example, I'm sure that PS advocates would agree that there are some targeted risk-reduction efforts that might be good investments, along with a larger class of progress-stimulating interventions. Likewise, I'm sure that XR advocates would agree that there are some targeted tech-stimulus projects that might be X-risk "security factors." Maybe the conversation doesn't need to be about whether "more progress" or "less progress" is desirable, but about the technical details of how we can manage risk while stimulating growth.

Regarding your question:

Does XR consider tech progress default-good or default-bad?

Leopold Aschenbrenner's paper Existential risk and growth provides one interesting perspective on this question (note that while I find the paper informative, I don't think it settles the question).

A key question the paper seeks to address is this:

Does faster economic growth accelerate the development of dangerous new technologies, thereby increasing the probability of an existential catastrophe?

The paper's (preliminary) conclusion is:

we could be living in a unique “time of perils,” having developed technologies advanced enough to threaten our permanent destruction, but not having grown wealthy enough yet to be willing to spend sufficiently on safety. Accelerating growth during this “time of perils” initially increases risk, but improves the chances of humanity’s survival in the long run. Conversely, even short-term stagnation could substantially curtail the future of humanity.

Aschenbrenner's model strikes me as a synthesis of the two intellectual programmes, and it doesn't get enough attention.

How does XR weigh costs and benefits?
Does XR consider tech progress default-good or default-bad?

The core concept here is differential intellectual progress. Tech progress can be bad if it reorders the sequence of technological developments to be worse, by making a hazard precede its mitigation. In practice, that applies mainly to gain-of-function research and to some, but not all, AI/ML research. There are lots of outstanding disagreements between rationalists about which AI/ML research is good vs bad, which, when zoomed in on, reveal disagreements about AI timeline and takeoff forecasts, and about the feasibility of particular AI-safety research directions.

Progress in medicine (especially aging- and cryonics-related medicine) is seen very positively (though there's a deep distrust of the existing institutions in this area, which bottoms out in a lot of rationalists doing their own literature reviews and wishing they could do their own experiments).

On a more gut/emotional level, I would plug my own Petrov Day ritual as attempting to capture the range of it: it's a mixed bag with a lot of positive bits, and some terrifying bits, and the core message is that you're supposed to be thinking about both and not trying to oversimplify things.

What would moral/social progress actually look like?

This seems like a good place to mention Dath Ilan, Eliezer's fictional* universe which is at a much higher level of moral/social progress, and the LessWrong Coordination/Cooperation tag, which has some research pointing in that general direction.

What does XR think about the large numbers of people who don't appreciate progress, or actively oppose it?

I don't think I know enough to speak about the XR community broadly here, but as for me personally: mostly frustrated that their thinking isn't granular enough. There's a huge gulf between saying "social media is toxic" and saying "it is toxic for the closest thing to a downvote button to be reply/share", and I try to tune out/unfollow the people whose writings say things closer to the former.

You mention that some EAs oppose progress / think that it is bad. I might be wrong, but I think these people only "oppose" progress insofar as they think x-risk reduction from safety-based investment is even better value on the margin. So it's not that they think progress is bad in itself, it's just that they think that speeding up progress incurs a very large opportunity  cost. Bostrom's 2003 paper outlines the general reasoning why many EAs think x-risk reduction is more important than quick technological development.

Also, I think most EAs interested in x-risk reduction would say that they're not really in a Pascal's mugging as the reduction in probability of an existential catastrophe occurring that can be achieved isn't astronomically small. This is partly because x-risk reduction is so neglected that there's still a lot of low-hanging fruit.

I'm not super certain on either of the points above but it's the sense I've gotten from the community.

To be honest, this parsing of these two communities that have a ton in common reminds me of a great scene from Monty Python's Life of Brian.

Regarding your question:

What would moral/social progress actually look like?

This is a big and difficult question, but here are some pointers to relevant concepts and resources:

  • Moral circle expansion (MCE) - MCE is "the attempt to expand the perceived boundaries of the category of moral patients." For instance, this could involve increasing the moral concern in the wider public (or, more  targeted, among societal decision-makers) for non-human animals or future people. Arguably,  MCE could help reduce the risk of societies committing further atrocities like factory farming and also increase the resources spent on existential risk mitigation (as there is a greater concern for the welfare of future people).
  • Improving institutional decision-making - this covers a very broad range of interventions, including, for instance, voting reform. The case for it is that "Improving the quality of decision-making in important institutions could improve our ability to solve almost all other problems. It could also help society’s ability to identify “unknown unknowns” – problems we haven’t even thought of yet – and to mitigate all global catastrophic risks".
  • See also this list of resources on differential progress (and in particular, Differential Intellectual Progress as a Positive-Sum Project)
  • Global priorities research (GPR) - arguably, a key priority for GPR is to provide answers to the above question.  For instance, this might involve rigorously investigating the plausibility of longtermism in the light of objections (such as the epistemic objection).
  • The Possibility of an Ongoing Moral Catastrophe - a very interesting paper arguing "for believing that our society is unknowingly guilty of serious, large-scale wrongdoing.". The paper ends by making two suggestions relevant to the above question: "The article then discusses what our society should do in light of the likelihood that we are doing something seriously wrong: we should regard intellectual progress, of the sort that will allow us to find and correct our moral mistakes as soon as possible, as an urgent moral priority rather than as a mere luxury; and we should also consider it important to save resources and cultivate flexibility, so that when the time comes to change our policies we will be able to do so quickly and smoothly."

[Likely not a crux]

EA often uses an Importance-Neglectedness-Tractability (ITN) framework for cause prioritization. I would expect work that produces progress to be somewhat less neglected than work on XR, because it is still somewhat possible to capture some of the benefits of progress.

We do indeed see vast amounts of time and money being spent on research and development, compared to the amount being spent on XR concerns. Possibly you'd prefer to compare with PS itself, rather than with all R&D? (a) I'm not sure how justified that is; (b) it still feels to me like it ought to be possible to capture some of the benefits from many of PS's proposed changes; (c) my weak impression is that PS (or things similar to PS, i.e. meta-improvements to progress) is still less neglected, and in particular that lots of people who don't explicitly identify as being part of PS are still working on related concerns.
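To make the neglectedness comparison concrete, here is a minimal sketch of the ITN-style arithmetic. Every number and cause label below is an invented placeholder for illustration; none of it is an estimate anyone in this thread has endorsed.

```python
# Toy ITN-style comparison. All numbers below are invented placeholders,
# not estimates that anyone in this discussion has endorsed.

def marginal_value(scale, solvability, resources_already_invested):
    """Rough heuristic: good done per extra unit of resources is roughly
    proportional to scale * solvability * neglectedness, where neglectedness
    is modeled here as 1 / (resources already invested)."""
    return scale * solvability / resources_already_invested

# Hypothetical cause areas, with (scale, solvability, resources already
# invested) in arbitrary made-up units:
causes = {
    "x-risk reduction":        (1000, 0.01, 1),    # huge scale, hard, very neglected
    "speeding up R&D broadly": (100,  0.10, 500),  # big, tractable, heavily resourced
    "PS-style meta work":      (100,  0.05, 20),   # in between on neglectedness
}

for name, (scale, solvability, invested) in causes.items():
    mv = marginal_value(scale, solvability, invested)
    print(f"{name:25s} marginal value ~ {mv:.3f}")
```

The only point of the sketch is that the neglectedness term can dominate the comparison even when the scale and tractability terms favor broad progress work.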

So here's a list of claims, each followed by cartoon responses representing my impression of typical PS and EA views (insert caveats here):

  1. Some important parts of "developed world" culture are too pessimistic. It would be very valuable to blast a message of definite optimism, viz. "The human condition can be radically improved! We have done it in the past, and we can do it again. Here are some ideas we should try..."

PS: Strongly agree. The cultural norms that support and enable progress are more fragile than you think.

EA: Agree. But, as an altruist, I tend to focus on preventing bad stuff rather than making good stuff happen (not sure why...).

  1. Broadly, "progress" comes about when we develop and use our capabilities to improve the human condition, and the condition of other moral patients (~sentient beings).

PS: Agree, this gloss seems basically fine for now.

EA: Agree, but we really need to improve on this gloss.

  3. Progress comes in different kinds: technological, scientific, ethical, and improvements in global coordination. At different times in history, different kinds will be more valuable. Balancing these capabilities matters: during some periods, increasing capabilities in one area (or a subfield of one area) may be disvaluable (cf. the Vulnerable World Hypothesis).

EA & PS: Seems right. Maybe we disagree on where the current margins are?

  4. Let's try not to destroy ourselves! The future could be wonderful!

EA & PS: Yeah, duh. But also eek—we recognise the dangers ahead.

  5. Markets and governments are quite functional, which means there's much more low-hanging fruit in pursuing the interests of those whom these systems aren't built to serve at all (e.g. future generations, animals).

PS: Hmm, take a closer look. There are a lot of trillion-dollar bills lying around, even in areas where an optimistic EMH would say that markets and governments ought to do well.

EA: So I used to be really into the EMH. These days, I'm not so sure...

  6. Broadly promoting industrial literacy is really important.

PS: Yes!

EA: I haven't thought about this much. Quick thought is that I'm happy to see some people working on this. I doubt it's the best option for many of the people we speak to, but it could be a good option for some.

  7. We can make useful predictions about the effects of new technologies.

PS (David Deutsch): I might grudgingly accept an extremely weak formulation of this claim. At least on Fridays. And only if you don't try to explicitly assign probabilities.

EA: Yes.

  8. You might be missing a crucial consideration!

PS: What's that? Oh, I see. Yeah. Well... I'm all for thinking hard about things, and acting on the assumption that I'm probably wrong about mostly everything. In the end, I guess I'm crossing my fingers, and hoping we can learn by trial and error, without getting ourselves killed. Is there another option?

EA: I know. This gives me nightmares.

On Max Daniel's thread, I left some general comments, a longer list of questions to which PS/EA might give different answers, and links to some of the discussions that shaped my perspective on this.

  9. How do you give advice?

PS (Tyler Cowen): I think about what I believe, then I think about what it's useful for people to hear, and then I say that.

EA: I think about what I believe, and then I say that. I generally trust people to respond appropriately to what I say.

I think it's more like:

EA: I think about what I believe. Then I think about whether this is an information hazard, and discuss this possibility with a lot of my friends. Then I say it in a way that makes a lot of sense to people who are a lot like me, but I don't much think about other audiences.

For those of us who are unfamiliar with Progress Studies, I think it would help if you clarified what exactly that community thinks or advocates.

Is the idea simply to prioritize economic growth? Is it about increasing the welfare of people alive today / in the near future? Would distributing malaria bed nets count as something that a Progress Studies person might advocate qua Progress Studies advocate? Or is it about technological development specifically? (If bed nets don't count as Progress Studies, would development of CRISPR technologies to eradicate malaria count? If yes, why? (Assume the CRISPR technology has the same estimated cost-effectiveness for preventing malaria deaths as bed nets over the next century.))

(Context note: I read this post, all the comments, then Ben Todd's question on your AMA, then your Progress Studies as Moral Imperative post. I don't really know anything about Progress Studies besides this context, but I'll offer my thoughts below in the hope that they help with identifying the crux.)

None of the comments so far have engaged with your road trip metaphor, so I'll bite:

In your Progress Studies as Moral Imperative post, it sounds like you're concerned that humanity might slow the car down, stop, and stay there indefinitely due to a lack of appreciation or respect for progress. Is that right?

Personally, I think that sounds very unlikely, and I don't feel concerned about it at all. I think nearly all other longtermists would agree.

The first thing your Moral Imperative post made me think of is Factfulness by Rosling et al. Before reading the book in 2019, I had often heard the idea that, roughly, "people don't know how much progress we've made lately." I felt like I had heard several people say this over a few years without ever actually encountering the people who were supposedly ignorant of that progress.

In the beginning of Factfulness, Rosling talks about how a bunch of educated people on a UN council (or something like that) were ignorant of basic facts about humanity's progress in recent decades. I defer to his claim, and yours, that people who are ignorant of the progress we've made do exist.

That said, when I took the pre-test quiz at the beginning of the book about the progress we've made, I got all of the questions right, and I was quite confident in essentially all of the answers. I recall thinking that other people I know (in the EA community, for example) would probably also get all the questions correct, despite the poor performance on the same quiz by the world leaders and other audiences Rosling spoke to over the years.

I say all this to suggest that maybe Progress Studies people are reactionary to some degree and longtermists (what you're calling "EA/XR" people) aren't? Maybe PS people are used to seeing a lot of people in society (including some educated and tech people) who are ignorant of progress or opposed to it, while EA people have experienced less of this, or just don't care to react to such people. Could this be the crux? Longtermists just aren't very concerned that we're going to stop progressing (aside from potentially crashing the car, i.e. existential or global catastrophic risk), whereas Progress Studies people are more likely to think that progress is slowing and coming to a stop.

"EA/XR" is a rather confusing term. Which do you want to talk about, EA or x-risk studies?

It is a mistake to consider EA and progress studies as equivalent or mutually exclusive. Progress studies is strictly an academic discipline. EA involves building a movement and making sacrifices for the sake of others. And progress studies can be a part of that, like x-risk.

Some people in EA who focus on x-risk may have differences of opinion with those in the field of progress studies.

First, PS is almost anything but an academic discipline (even though that's the context in which it was originally proposed). The term is a bit of a misnomer; I think more in terms of there being (right now) a progress community/movement.

I agree these things aren't mutually exclusive, but there seems to be a tension or difference of opinion (or at least of emphasis and priority) between folks in the “progress studies” community and those in the “longtermist EA” camp who worry about x-risk (sorry if I'm not using the terms with perfect precision). That's what I'm getting at and trying to understand.

OK, sorry for misunderstanding.

I make an argument here that marginal long-run growth is dramatically less important than marginal x-risk. I'm not fully confident in it. But the crux could be what I highlight there: whether society is on an endless track of exponential growth, or on the cusp of a fantastical but fundamentally limited successor stage. Put more precisely, the crux of the importance of x-risk is how good the future will be, whereas the crux of the importance of progress is whether differential growth today will mean much for the far future.

I would still ceteris paribus pick more growth rather than less, and from what I've seen of Progress Studies researchers, I trust them to know how to do that well.
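To make the shape of that argument explicit, here is a minimal expected-value sketch under the "fundamentally limited successor stage" assumption. It is not the commenter's actual model, and every quantity in it is an invented placeholder.

```python
# A toy expected-value comparison between marginal growth and marginal
# x-risk reduction, in the spirit of the argument above. Every number is a
# made-up placeholder, not a real estimate.

FUTURE_VALUE = 1e9    # total value of the long-run future if we survive (arbitrary units)
FUTURE_YEARS = 1e6    # how long that future lasts once it plateaus
ANNUAL_VALUE = FUTURE_VALUE / FUTURE_YEARS  # value of one "plateau year"

# Option A: speed up progress so the good future arrives one year earlier.
# Under the "limited successor stage" assumption, the gain is roughly one
# extra plateau-year of value.
gain_from_speedup = ANNUAL_VALUE

# Option B: reduce the probability of existential catastrophe by 0.01
# percentage points, which scales the value of the entire future.
delta_p = 1e-4
gain_from_xrisk_reduction = delta_p * FUTURE_VALUE

print(f"Gain from a one-year speed-up:    {gain_from_speedup:,.0f}")
print(f"Gain from 0.01% x-risk reduction: {gain_from_xrisk_reduction:,.0f}")
# With these placeholders the x-risk term dominates by a factor of 100. If
# value instead compounds without bound, the comparison can flip, which is
# exactly the stated crux.
```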

It's important to compare with long-term political and social change too. That is arguably a higher priority than either effort, but it is also something that economic progress can serve indirectly. One thing the progress studies discourse has persuaded me of is that some social and political malaise arises when society stops growing. Healthy politics may require fast, nonstop growth (though that is a worrying thing if true).
